Ethical AI Team
Welp, There Goes Twitter's Ethical AI Team, Among Others as Employees Post Final Messages
Twitter began its highly anticipated mass layoffs Thursday evening by eliminating its top internal critics. Twitter's Ethical AI team, according to former staff, is no more. Multiple members of Twitter's Machine Learning, Ethics, Transparency and Accountability (META) team, including its former leader, posted on Twitter saying they were no longer at the company. At least one of the former workers suggested the entire team was being disbanded. The apparent layoffs affecting the company's strongest internal watchdog group come as thousands more brace for cuts that could hit around half of the company's staff, according to previous reports.
Leaving ethical AI researcher describes 'the rot' inside Google
In her resignation letter Wednesday, Google ethical AI researcher Alex Hanna accused the company of having deep "rot" in its internal culture, "maintain[ing] white supremacy behind the veneer of race-neutrality" and being a workplace where those with "little interest in mitigating the worst harms" of its products are promoted at "lightning speed." "I am quitting because I'm tired," Hanna wrote, announcing that she is joining the research institute recently founded by Timnit Gebru, the prominent AI ethicist who previously co-led Google's ethical AI team. Gebru was fired from the company in 2020 after a dispute over a research paper raising concerns about large language models. Hanna will be joined at Gebru's Distributed Artificial Intelligence Research Institute by Dylan Baker, a software engineer who also resigned Wednesday. "When I joined Google, I was cautiously optimistic about the promise of making the world better with technology. I'm a lot less techno-solutionist now," Baker wrote in a separate letter, describing the "cognitive dissonance" of working at a place where full-time employees and contract workers received such different benefits.
Alphabet Workers Union Giving Structure to Activism at Google - AI Trends
In a highly unusual undertaking in the technology industry, the Alphabet Workers Union was formed by over 400 Google engineers and other workers in early January. The union now has about 800 members. The Alphabet Workers Union is a minority union, representing a fraction of the company's more than 260,000 full-time employees and contractors. The workers stated at the outset that it was primarily an effort to give structure to activism at Google, rather than to negotiate for a contract. The union is affiliated with the Communications Workers of America (CWA), a union representing workers in telecommunications and media in the US and Canada.
- North America > United States > California (0.05)
- North America > Canada > Alberta (0.05)
- Europe > Poland > Lesser Poland Province > Kraków (0.05)
- Asia > India (0.05)
- Law > Labor & Employment Law (1.00)
- Information Technology > Services (1.00)
Taking On Tech: Dr. Timnit Gebru Exposes The Underbelly Of Performative Diversity In The Tech Industry
SAN FRANCISCO, CA - SEPTEMBER 07: Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. Taking On Tech is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this inaugural report, For(bes) The Culture kicks things off with Dr. Timnit Gebru, a former researcher and co-lead of Google's Ethical AI team. When Gebru was forced out of Google after refusing to retract a research paper that had already been cleared by Google's internal review process, a conversation about the tech industry's inherent diversity problem resurfaced. The paper raised concerns about algorithmic bias in machine learning and the latent perils that AI presents for marginalized communities. Around 1,500 Google employees signed a letter in protest, calling for accountability and answers over what they described as her unethical firing.
- Information Technology > Services (0.69)
- Law > Civil Rights & Constitutional Law (0.68)
Google Artificial Intelligence Team Draws From Critical Race Theory, Internal Document Shows
Google's artificial intelligence (AI) work draws from Critical Race Theory, a philosophical framework that posits that nearly every interaction should be seen as a racial power struggle and seeks to "disrupt" American society which it views as immutably racist, according to a company document obtained by The Daily Wire. A screenshot of an internal company page, obtained by The Daily Wire, says under the header "Ethical AI": We focus on AI at the intersection of Machine Learning and society, developing projects that inform the general public; bringing the complexities of individual identity into the development of human-centric AI; and creating ways to measure different kinds of biases and stereotypes. Out [sic] work includes lessons from gender studies, critical race theory, computational linguistics, computer vision, engineering education, and beyond! Google's Ethical AI team appears intent on encoding far-left ideology into its algorithms even after previous leaders of the team plunged the section into chaos over their insistence on overlaying progressive politics onto mathematics. Until recently, the team was co-led by Timnit Gebru, who cofounded a "Black in AI" racial affinity group and in 2018 coauthored a paper saying facial recognition technology was less accurate at recognizing women and minorities.
- Education > Curriculum > Subject-Specific Education (0.56)
- Law > Civil Rights & Constitutional Law (0.37)
Inside the fight to reclaim AI from Big Tech's control
Timnit Gebru never thought a scientific paper would cause her so much trouble. In 2020, as the co-lead of Google's ethical AI team, Gebru had reached out to Emily Bender, a linguistics professor at the University of Washington, and asked to collaborate on research about the troubling direction of artificial intelligence. Gebru wanted to identify the risks posed by large language models, one of the most stunning recent breakthroughs in AI research. The models are algorithms trained on staggering amounts of text. Under the right conditions, they can compose what look like convincing passages of prose.
Google says it's committed to ethical AI research. Its ethical AI team isn't so sure.
Six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company's key products, the company says it's still deeply committed to ethical AI research. It promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support to fund more ethical AI projects. Jeff Dean, the company's head of AI, said in May that while the controversy surrounding Gebru's departure was a "reputational hit," it's time to move on. But some current members of Google's tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google's broader new responsible AI organization.
The Future Of Work Now: Ethical AI At Salesforce
In September 2016, Salesforce founder and CEO Marc Benioff informed employees, customers, and investors that Salesforce would become an AI-driven company. Earlier that year, Microsoft had released its Tay research chatbot through a Twitter account. Microsoft shut Tay down after only 16 hours because it began to mimic the deliberately offensive behavior of other Twitter users, and Microsoft had not equipped the bot to recognize such inappropriate behavior. With chatbots among Salesforce's most promising customer service technologies, Kathy Baxter, in her role at that time as Principal User Researcher, wanted to understand what went wrong with Tay and how that type of AI-enabled system behavior could be avoided at Salesforce.
Google offered a professor $60,000, but he turned it down. Here's why
When Luke Stark sought money from Google in November he had no idea he'd be turning down $60,000 from the tech giant in March. Stark, an assistant professor at Western University in Ontario, Canada, studies the social and ethical impacts of artificial intelligence. In late November, he applied for a Google Research Scholar award, a no-strings-attached research grant of up to $60,000 to support professors who are early in their careers. He put in for the award, he said, "because of my sense at the time that Google was building a really strong, potentially industry-leading ethical AI team." Soon after, that feeling began to dissipate.
- North America > Canada > Ontario (0.25)
- North America > United States > Texas > Travis County > Austin (0.05)